2. Entropy and Regularization
Abstract
The minimum mean-square-error (MSE) criterion plays an indispensable role in the learning and adaptation of neural systems. Nevertheless, the instantaneous value of the modeling error alone does not convey sufficient information about how accurately the estimated model represents the underlying structure of the data. In this paper, we propose extending the traditional MSE cost function with a regularization term based on the incremental errors in the model output. We demonstrate the stochastic equivalence between the proposed regularization term and the error entropy. Finally, we derive an RLS-type algorithm for the proposed cost function, which we call the recursive least squares with entropy regularization (RLSER) algorithm. RLSER is shown to outperform standard RLS in supervised training with noisy data.
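To make the proposal concrete, below is a minimal sketch of one plausible reading of such a cost: an exponentially weighted least-squares objective augmented with a penalty gamma * (e_k - e_{k-1})^2 on the incremental error, minimized recursively by feeding each incremental-error constraint to a standard RLS update as an extra weighted observation. The function rlser_sketch, the weight gamma, and the two-step update are illustrative assumptions, not the exact RLSER recursion derived in the paper.

import numpy as np

def rlser_sketch(X, d, gamma=0.5, lam=1.0, delta=100.0):
    # Hedged sketch: minimize sum_k [ e_k^2 + gamma*(e_k - e_{k-1})^2 ]
    # recursively. Each incremental-error constraint is fed to a standard
    # RLS update as an extra weighted observation. Illustrative only.
    n, m = X.shape
    w = np.zeros(m)
    P = delta * np.eye(m)              # inverse-correlation estimate

    def rls_step(w, P, x, y):
        Px = P @ x
        k = Px / (lam + x @ Px)        # RLS gain vector
        w = w + k * (y - w @ x)        # correct by the a priori error
        P = (P - np.outer(k, Px)) / lam
        return w, P

    x_prev, d_prev = np.zeros(m), 0.0
    for x, y in zip(X, d):
        w, P = rls_step(w, P, x, y)                        # MSE term
        w, P = rls_step(w, P,                              # incremental-
                        np.sqrt(gamma) * (x - x_prev),     # error term
                        np.sqrt(gamma) * (y - d_prev))
        x_prev, d_prev = x, y
    return w

# usage: identify a noisy 4-tap linear system
rng = np.random.default_rng(0)
X = rng.standard_normal((500, 4))
w_true = np.array([1.0, -2.0, 0.5, 3.0])
d = X @ w_true + 0.3 * rng.standard_normal(500)
print(rlser_sketch(X, d))              # should approach w_true

The two-observation trick is valid because, for fixed weights, e_k - e_{k-1} = (d_k - d_{k-1}) - w^T (x_k - x_{k-1}), so the incremental-error penalty is itself a least-squares term in w.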
Similar references
Maximum Entropy Density Estimation with Generalized Regularization and an Application to Species Distribution Modeling
We present a unified and complete account of maximum entropy density estimation subject to constraints represented by convex potential functions or, alternatively, by convex regularization. We provide fully general performance guarantees and an algorithm with a complete convergence proof. As special cases, we easily derive performance guarantees for many known regularization types, including ℓ1...
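As orientation for the truncated text above, the usual form of this duality (stated from general knowledge of regularized maxent, not quoted from the abstract) is that relaxing each feature-expectation constraint by a budget \beta_j is equivalent to ℓ1-regularized log-loss minimization over Gibbs distributions:

\min_{\lambda} \; -\frac{1}{m} \sum_{i=1}^{m} \log q_\lambda(x_i) \;+\; \sum_j \beta_j\, |\lambda_j|,
\qquad q_\lambda(x) = \frac{e^{\lambda \cdot f(x)}}{\sum_{x'} e^{\lambda \cdot f(x')}}.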
A non-extensive maximum entropy based regularization method for bad conditioned inverse problems
A regularization method based on the non-extensive maximum entropy principle is devised. Special emphasis is given to the q = 1/2 case. We show that, when the residual principle is considered as a constraint, the q = 1/2 generalized distribution of Tsallis yields a regularized solution for ill-conditioned problems. The so-devised regularized distribution is endowed with a component which correspo...
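For reference, the non-extensive (Tsallis) entropy of order q that this method builds on is the standard

S_q(p) = \frac{1}{q - 1} \Bigl( 1 - \sum_i p_i^{\,q} \Bigr),

which recovers the Shannon entropy -\sum_i p_i \log p_i in the limit q \to 1; the q = 1/2 case highlighted in the abstract corresponds to S_{1/2}.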
Maximum Entropy Distribution Estimation with Generalized Regularization
We present a unified and complete account of maximum entropy distribution estimation subject to constraints represented by convex potential functions or, alternatively, by convex regularization. We provide fully general performance guarantees and an algorithm with a complete convergence proof. As special cases, we can easily derive performance guarantees for many known regularization types, inc...
Incremental Feature Selection and ℓ1 Regularization for Relaxed Maximum-Entropy Modeling
We present an approach to bounded constraint relaxation for entropy maximization that corresponds to using a double-exponential prior or ℓ1 regularizer in likelihood maximization for log-linear models. We show that a combined incremental feature selection and regularization method can be established for maximum entropy modeling by a natural incorporation of the regularizer into gradient-based fea...
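The prior-regularizer correspondence invoked here is standard: a zero-mean double-exponential (Laplace) prior on each weight contributes a negative log-prior that is an ℓ1 penalty up to an additive constant,

p(\lambda_j) = \frac{1}{2b}\, e^{-|\lambda_j| / b}
\quad \Longrightarrow \quad
-\log p(\lambda) = \frac{1}{b} \sum_j |\lambda_j| + \text{const},

so MAP estimation under this prior coincides with ℓ1-regularized likelihood maximization.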
Equivalence of Entropy Regularization and Relative-Entropy Proximal Method
We consider two entropy-based interior point methods that solve LP relaxations of MAP estimation in graphical models: (1) an entropy-regularization method and (2) a relative-entropy proximal method. Using the fact that relative-entropy is the Bregman distance induced by entropy, we show that the two approaches are actually equivalent. The purpose of this note is to show one connection between t...
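The identity driving this equivalence is short enough to state here (a standard computation, not quoted from the note): the Bregman distance induced by the negative entropy \varphi(p) = \sum_i p_i \log p_i is

D_\varphi(p, q) = \varphi(p) - \varphi(q) - \langle \nabla \varphi(q),\, p - q \rangle
= \sum_i p_i \log \frac{p_i}{q_i} - \sum_i p_i + \sum_i q_i,

which reduces to the relative entropy (KL divergence) \sum_i p_i \log (p_i / q_i) whenever p and q are normalized.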
Regularization and Topological Entropy for the Spatial N-center Problem
We show that the n-center problem in R^3 has positive topological entropy for n ≥ 3. The proof is based on global regularization of singularities and the results of Gromov and Paternain on the topological entropy of geodesic flows. The n-center problem in S^3 is also studied.